    Task-based Augmented Contour Trees with Fibonacci Heaps

    This paper presents a new algorithm for the fast, shared-memory, multi-core computation of augmented contour trees on triangulations. In contrast to most existing parallel algorithms, our technique computes augmented trees, enabling the full extent of contour-tree-based applications, including data segmentation. Our approach completely revisits the traditional, sequential contour tree algorithm to re-formulate all the steps of the computation as a set of independent local tasks. This includes a new computation procedure based on Fibonacci heaps for the join and split trees, two intermediate data structures used to compute the contour tree, whose constructions are efficiently carried out concurrently thanks to the dynamic scheduling of task parallelism. We also introduce a new parallel algorithm for the combination of these two trees into the output global contour tree. Overall, this results in superior time performance in practice, both sequentially and in parallel, thanks to the OpenMP task runtime. We report performance numbers that compare our approach to reference sequential and multi-threaded implementations for the computation of augmented merge and contour trees. These experiments demonstrate the run-time efficiency of our approach and its scalability on common workstations. We demonstrate the utility of our approach in data segmentation applications.
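
    To make the task-based pattern concrete, the following is a minimal C++ sketch, not the paper's implementation: each local minimum spawns an OpenMP task that grows a join-tree arc upward, with the propagation front kept in a Boost Fibonacci heap. The mesh access (nbr, f), the toy graph, and the simplified saddle test (more than one lower neighbor, where the paper uses the connected components of the lower link) are illustrative assumptions, and the O(1) melding of fronts at saddles is elided.

        // Hedged sketch: task-parallel join-tree arc growth with a Fibonacci
        // heap as the propagation front. Mesh access (nbr, f) is hypothetical.
        #include <boost/heap/fibonacci_heap.hpp>
        #include <cstdio>
        #include <utility>
        #include <vector>

        using Node = std::pair<double, int>;  // (scalar value, vertex id)
        using Heap = boost::heap::fibonacci_heap<
            Node, boost::heap::compare<std::greater<Node>>>;  // min-heap

        // Grow the arc above one minimum until a join saddle is reached.
        int growArc(int minimum, const std::vector<std::vector<int>>& nbr,
                    const std::vector<double>& f) {
          Heap front;
          std::vector<char> inFront(f.size(), 0);
          front.emplace(f[minimum], minimum);
          inFront[minimum] = 1;
          while (!front.empty()) {
            auto [val, v] = front.top();
            front.pop();
            int lower = 0;
            for (int u : nbr[v])
              if (f[u] < val) ++lower;
            if (lower > 1) return v;  // join saddle: this arc ends here; the
                                      // paper melds pending fronts in O(1)
            for (int u : nbr[v])      // absorb v, extend the front upward
              if (f[u] > val && !inFront[u]) {
                front.emplace(f[u], u);
                inFront[u] = 1;
              }
          }
          return -1;  // swept up to the global maximum
        }

        int main() {
          // toy path graph: two minima (vertices 0 and 4) joining at vertex 2
          std::vector<double> f = {0.0, 2.0, 3.0, 1.5, 0.5};
          std::vector<std::vector<int>> nbr = {{1}, {0, 2}, {1, 3}, {2, 4}, {3}};
          std::vector<int> minima = {0, 4};
        #pragma omp parallel
        #pragma omp single nowait
          for (int m : minima) {
        #pragma omp task firstprivate(m) shared(nbr, f)
            {
              int s = growArc(m, nbr, f);
        #pragma omp critical
              std::printf("arc from minimum %d ends at vertex %d\n", m, s);
            }
          }
          return 0;
        }

    In the actual algorithm, the task arriving last at a saddle melds the pending fronts, in constant time thanks to the Fibonacci heaps, and continues growing the parent arc.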

    Jacobi Fiber Surfaces for Bivariate Reeb Space Computation

    This paper presents an efficient algorithm for the computation of the Reeb space of an input bivariate piecewise linear scalar function f defined on a tetrahedral mesh. By extending and generalizing algorithmic concepts from the univariate case to the bivariate one, we report the first practical, output-sensitive algorithm for the exact computation of such a Reeb space. The algorithm starts by identifying the Jacobi set of f, the bivariate analog of critical points in the univariate case. Next, the Reeb space is computed by segmenting the input mesh along the new notion of Jacobi Fiber Surfaces, the bivariate analog of critical contours in the univariate case. We additionally present a simplification heuristic that enables the progressive coarsening of the Reeb space. Our algorithm is simple to implement and most of its computations can be trivially parallelized. We report performance numbers demonstrating orders-of-magnitude speedups over previous approaches, enabling for the first time the tractable computation of bivariate Reeb spaces in practice. Moreover, unlike range-based quantization approaches (such as the Joint Contour Net), our algorithm is parameter-free. We demonstrate the utility of our approach by using the Reeb space as a semi-automatic segmentation tool for bivariate data. In particular, we introduce continuous scatterplot peeling, a technique which reduces clutter in the continuous scatterplot by interactively selecting the features of the Reeb space to project. We provide a VTK-based C++ implementation of our algorithm that can be used for reproduction purposes or for the development of new Reeb-space-based visualization techniques.
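
    Since fibers are the building block of such Reeb space computations, a small C++ sketch may help fix ideas; it is illustrative, not the paper's code. Inside a single tetrahedron, where f and g are linear, the fiber of a range point (a, b) can be obtained by first cutting the tetrahedron at f = a, then extracting the g = b level line on the resulting planar polygon. All helper names are hypothetical; the four-point (quad) cut case, which additionally requires ordering its vertices, and degenerate cases are deliberately ignored.

        // Hedged sketch: fiber of a range point (a, b) within one tetrahedron
        // of a piecewise linear bivariate field (f, g).
        #include <array>
        #include <cstdio>
        #include <utility>
        #include <vector>

        struct P3 { double x, y, z; };

        static P3 lerp(const P3& p, const P3& q, double t) {
          return {p.x + t * (q.x - p.x), p.y + t * (q.y - p.y),
                  p.z + t * (q.z - p.z)};
        }

        // Cut the tetrahedron at f = a; returns the cut polygon's vertices
        // with their interpolated g values. Degenerate cases (f[i] == a) and
        // the ordering of the 4-point quad case are ignored in this sketch.
        std::vector<std::pair<P3, double>>
        cutAtF(const std::array<P3, 4>& pts, const std::array<double, 4>& f,
               const std::array<double, 4>& g, double a) {
          std::vector<std::pair<P3, double>> poly;
          static const int edges[6][2] = {{0,1},{0,2},{0,3},{1,2},{1,3},{2,3}};
          for (auto& e : edges) {
            double f0 = f[e[0]], f1 = f[e[1]];
            if ((f0 < a) == (f1 < a)) continue;  // edge not crossed
            double t = (a - f0) / (f1 - f0);
            poly.push_back({lerp(pts[e[0]], pts[e[1]], t),
                            g[e[0]] + t * (g[e[1]] - g[e[0]])});
          }
          return poly;
        }

        // The fiber segment is the g = b level line on the cut polygon,
        // a straight segment since g is linear on the cut.
        std::vector<P3> fiberSegment(
            const std::vector<std::pair<P3, double>>& poly, double b) {
          std::vector<P3> seg;
          for (size_t i = 0; i < poly.size(); ++i) {
            const auto& [p0, g0] = poly[i];
            const auto& [p1, g1] = poly[(i + 1) % poly.size()];
            if ((g0 < b) == (g1 < b)) continue;
            seg.push_back(lerp(p0, p1, (b - g0) / (g1 - g0)));
          }
          return seg;  // two endpoints of the fiber inside this tetrahedron
        }

        int main() {
          // unit tetrahedron with f = x, g = y; fiber of (0.25, 0.25)
          std::array<P3, 4> pts = {{{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}}};
          std::array<double, 4> f = {0, 1, 0, 0}, g = {0, 0, 1, 0};
          auto seg = fiberSegment(cutAtF(pts, f, g, 0.25), 0.25);
          for (auto& p : seg) std::printf("(%g, %g, %g)\n", p.x, p.y, p.z);
          return 0;
        }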

    Fast and Exact Fiber Surfaces for Tetrahedral Meshes

    Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-images of polygons in range space. However, the existing algorithm for their computation is approximate and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases, and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data structures in both the geometrical domain and the range space, and we show how to generalize the interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purposes. A VTK-based reference implementation is provided as additional material to reproduce our results.
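
    As a rough illustration of the interval-tree generalization mentioned above (a sketch under assumptions, not the paper's exact data structure): each tetrahedron can be indexed by its f-range in a classic centered interval tree, queried with the f-extent of one range-polygon edge, and the candidates conservatively post-filtered by their g-range. All identifiers (TetRange, build, query) are made up for this example.

        // Hedged sketch: centered interval tree over per-tetrahedron f-ranges,
        // with a conservative g-range post-filter, for fiber surface queries.
        #include <algorithm>
        #include <cstdio>
        #include <memory>
        #include <vector>

        struct TetRange { int id; double fMin, fMax, gMin, gMax; };

        struct Node {
          double center = 0.0;
          std::vector<TetRange> here;  // intervals containing 'center'
          std::unique_ptr<Node> left, right;
        };

        std::unique_ptr<Node> build(std::vector<TetRange> items) {
          if (items.empty()) return nullptr;
          auto node = std::make_unique<Node>();
          // split at the midpoint of the median interval
          std::nth_element(items.begin(), items.begin() + items.size() / 2,
                           items.end(),
                           [](const TetRange& a, const TetRange& b) {
                             return a.fMin + a.fMax < b.fMin + b.fMax;
                           });
          const TetRange& m = items[items.size() / 2];
          node->center = 0.5 * (m.fMin + m.fMax);
          std::vector<TetRange> l, r;
          for (auto& t : items) {
            if (t.fMax < node->center) l.push_back(t);
            else if (t.fMin > node->center) r.push_back(t);
            else node->here.push_back(t);  // straddles the center
          }
          node->left = build(std::move(l));
          node->right = build(std::move(r));
          return node;
        }

        // Collect tets whose f-range overlaps [qLo, qHi] and whose g-range
        // overlaps [gLo, gHi]: conservative candidates for one polygon edge.
        void query(const Node* n, double qLo, double qHi, double gLo,
                   double gHi, std::vector<int>& out) {
          if (!n) return;
          for (auto& t : n->here)
            if (t.fMin <= qHi && qLo <= t.fMax && t.gMin <= gHi && gLo <= t.gMax)
              out.push_back(t.id);
          if (qLo < n->center) query(n->left.get(), qLo, qHi, gLo, gHi, out);
          if (qHi > n->center) query(n->right.get(), qLo, qHi, gLo, gHi, out);
        }

        int main() {
          std::vector<TetRange> tets = {{0, 0.0, 0.3, 0.1, 0.5},
                                        {1, 0.4, 0.9, 0.2, 0.6}};
          auto tree = build(tets);
          std::vector<int> hits;
          query(tree.get(), 0.25, 0.5, 0.0, 0.3, hits);  // one edge's extents
          for (int id : hits) std::printf("candidate tet %d\n", id);
          return 0;
        }

    The candidates returned this way still have to be cut exactly against the range polygon; the tree only prunes the tetrahedra whose range-space footprint cannot intersect it.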

    Exascale Computing Deployment Challenges

    As Exascale computing proliferates, we see an accelerating shift towards clusters with thousands of nodes and thousands of cores per node, often on the back of commodity graphics processing units. This paper argues that this drives a once-in-a-generation shift of computation, and that the fundamentals of computer science therefore need to be re-examined. Exploiting the full power of Exascale computation will require attention to the fundamentals of programme design and specification, programming language design, systems and software engineering, analytic, performance and cost models, fundamental algorithmic design, and the increasing replacement of human bandwidth by computational analysis. As part of this, we argue that Exascale computing will require a significant degree of co-design and close attention to the economics underlying the challenges ahead.

    Habitat preferences, foraging behaviour and bycatch risk among breeding sooty shearwaters Ardenna grisea in the Southwest Atlantic

    Pelagic seabirds are important components of many marine ecosystems. The most abundant species are medium/small-sized petrels (<1100 g), yet the sub-mesoscale (<10 km) distribution, habitat use and foraging behaviour of this group are not well understood. Sooty shearwaters Ardenna grisea are among the world’s most numerous pelagic seabirds. The majority inhabit the Pacific, where they have declined, partly due to bycatch and other anthropogenic impacts, but they are increasing in the Atlantic. To evaluate the sub-mesoscale habitat preferences (i.e. the disproportionality between habitat use and availability), diving behaviour and bycatch risk of Atlantic breeders, we tracked sooty shearwaters from the Falkland Islands during late incubation and early chick-rearing with GPS loggers (n = 20), geolocators (n = 10) and time-depth recorders (n = 10). These birds foraged exclusively in neritic and shelf-break waters, principally over the Burdwood Bank, ~350 km from their colony. Like New Zealand breeders, they dived mostly during daylight, especially at dawn and dusk, consistent with the exploitation of vertically migrating prey. However, Falkland birds made shorter foraging trips, shallower dives, and did not forage in oceanic waters. Their overlap with fisheries was low, and they foraged at shallower depths than those targeted by trawlers, the most frequent fishing vessels encountered, indicating that bycatch risk was low during late incubation/early chick-rearing. Although our results should be treated with caution, they indicate that Atlantic and Pacific sooty shearwaters may experience markedly differing pressures at sea. Comparative study between these populations, e.g. combining biologging and demography, is therefore warranted.

    Hierarchies and Ranks for Persistence Pairs

    We develop a novel hierarchy for zero-dimensional persistence pairs, i.e., connected components, which is capable of capturing more fine-grained spatial relations between persistence pairs. Our work is motivated by the lack of spatial relationships between features in persistence diagrams, which limits their expressive power. We build upon a recently introduced hierarchy of pairs in persistence diagrams that augments the pairing stored in persistence diagrams with information about which components merge. Our proposed hierarchy captures differences in branching structure. Moreover, we show how to use our hierarchy to measure the spatial stability of a pairing, and we define a rank function for persistence pairs and demonstrate different applications. (Comment: Topology-based Methods in Visualization 201)
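
    For readers unfamiliar with zero-dimensional persistence pairs, here is a small, self-contained C++ sketch of the standard elder-rule computation that such hierarchies refine; it is background material, not the authors' construction. Vertices of a scalar field on a graph are swept by increasing value, components are tracked with union-find, and each merge both emits a persistence pair and records which component was absorbed into which, which is exactly the merge information the hierarchy augments diagrams with. The toy graph and values are illustrative.

        // Hedged sketch: 0-dimensional persistence pairs via union-find,
        // recording the direction of each component merge.
        #include <algorithm>
        #include <cstdio>
        #include <numeric>
        #include <utility>
        #include <vector>

        struct UF {
          std::vector<int> parent;
          explicit UF(int n) : parent(n) {
            std::iota(parent.begin(), parent.end(), 0);
          }
          int find(int x) {  // with path compression
            return parent[x] == x ? x : parent[x] = find(parent[x]);
          }
        };

        int main() {
          // toy path graph: values on vertices, edges between neighbors
          std::vector<double> f = {0.0, 3.0, 1.0, 4.0, 0.5};
          int n = (int)f.size();
          std::vector<std::pair<int, int>> edges = {{0,1},{1,2},{2,3},{3,4}};

          // sweep vertices by increasing value; an edge appears once both
          // of its endpoints have appeared (lower-star filtration)
          std::vector<int> order(n);
          std::iota(order.begin(), order.end(), 0);
          std::sort(order.begin(), order.end(),
                    [&](int a, int b) { return f[a] < f[b]; });

          UF uf(n);
          std::vector<int> birth(n);  // birth vertex of each component root
          std::iota(birth.begin(), birth.end(), 0);
          std::vector<char> alive(n, 0);

          for (int v : order) {
            alive[v] = 1;
            for (auto [a, b] : edges) {  // naive adjacency scan, for brevity
              if (a != v && b != v) continue;
              int u = (a == v) ? b : a;
              if (!alive[u]) continue;
              int ru = uf.find(u), rv = uf.find(v);
              if (ru == rv) continue;
              // elder rule: make rv the younger component (later birth)
              if (f[birth[ru]] > f[birth[rv]]) std::swap(ru, rv);
              uf.parent[rv] = ru;  // record merge direction: rv -> ru
              if (f[birth[rv]] < f[v])  // report non-trivial pairs only
                std::printf("pair (%g, %g): component of vertex %d merges "
                            "into component of vertex %d\n",
                            f[birth[rv]], f[v], birth[rv], birth[ru]);
            }
          }
          // the component of the global minimum never dies (essential pair)
          return 0;
        }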

    Water-assisted laser desorption/ionization mass spectrometry for minimally invasive in vivo and real-time surface analysis using SpiderMass

    Rapid, sensitive, precise and accurate analysis of samples in their native in vivo environment is critical to better decipher physiological and physiopathological mechanisms. SpiderMass is an ambient mass spectrometry (MS) system designed for mobile in vivo and real-time surface analyses of biological tissues. The system uses a fibered laser, which is tuned to excite the most intense vibrational band of water, resulting in a process termed water-assisted laser desorption/ionization (WALDI). The water molecules act as an endogenous matrix in a matrix-assisted laser desorption/ionization (MALDI)-like scenario, leading to the desorption/ionization of biomolecules (lipids, metabolites and proteins). The ejected material is transferred to the mass spectrometer through an atmospheric interface and a transfer line that is several meters long. Here, we describe a three-stage procedure that includes (i) a laser system setup coupled to a Waters Q-TOF or Thermo Fisher Q Exactive mass analyzer, (ii) analysis of specimens and (iii) data processing. We also describe the optimal setup for the analysis of cell cultures, fresh-frozen tissue sections and in vivo experiments on skin. With proper optimization, the system can be used for a variety of different targets and applications. The entire procedure takes 1–2 d for complex samples.